Self-driving car Nanodegree - Term 1

Project 0: Finding Lane Lines on the Road


When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are act as our constant reference for where to steer the vehicle. Naturally, one of the first things we would like to do in developing a self-driving car is to automatically detect lane lines using an algorithm.

The project objective is to detect lane lines in images using Python and OpenCV.

Author: Vu Tran


Importing packages


In [1]:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
import os
%matplotlib inline
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

Build a Lane Finding Pipeline

My pipeline consisted of six steps:

  • First, I converted the image to grayscale with cv2.

  • Next, I applied a Gaussian blur before Canny edge detection to further smooth the image in the hope of a better result; the kernel size was 3, used without any tuning.

  • Next, I applied Canny edge detection, one of the main techniques from the course, to the image. I spent some time finding the low and high thresholds that gave the best result from my observation; the final values were 80 and 240, which correspond to the recommended 1:3 ratio.

  • The next step was to define the region of interest bounded by the lane lines. I used a four-sided polygon to mask the region: the bottom two points are simply the bottom corners of the image frame, while the top two points were found by observation and a few rounds of trial and error.

  • The next step was the Hough transform. There was not much parameter tuning here; I mainly used the values recommended in the course lectures, and they worked reasonably well.

  • The final step was to draw the lines obtained from the Hough transform in the draw_lines function. I first calculated the slope of each detected segment from its two endpoints using (y2-y1)/(x2-x1). If the slope is positive, I put the endpoints in the right-line list, and if it is negative, in the left-line list; segments with zero slope are discarded, since horizontal lines are just noise. Using the left and right lists, I fit a first-degree polynomial to get the coefficients of each line and construct the left and right line functions. I then evaluate each line function at two fixed x-coordinates to get the corresponding y-coordinates for the left and right lines. With the resulting (x, y) endpoints for both lines, I simply draw them (see the short polyfit sketch after this list).
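
As a quick illustration of the extrapolation in the final step, the sketch below fits a first-degree polynomial through a handful of made-up left-line segment endpoints and evaluates the resulting line at two x-coordinates. The endpoint values are purely hypothetical; only the np.polyfit/np.poly1d usage mirrors what draw_lines does.

import numpy as np

# endpoints collected from hypothetical left-lane segments (illustrative values only)
left_x = [150, 210, 290, 330]
left_y = [540, 500, 430, 400]

coeff = np.polyfit(left_x, left_y, 1)   # fit y = a*x + b (first-degree polynomial)
line = np.poly1d(coeff)                 # callable line function, y = line(x)

# evaluate the line at the two x-coordinates used for the left line in draw_lines
print(int(line(0)), int(line(460)))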


Helper functions


In [1]:
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.
    
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    #defining a blank mask to start with
    mask = np.zeros_like(img)   
    
    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
        
    #filling pixels inside the polygon defined by "vertices" with the fill color    
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    
    #returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image



def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
        
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img


def draw_lines(img, lines, color=[255, 0, 0], thickness = 10):
    """
    NOTE: this is the function to use as a starting point once you want to 
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).  
    
    Think about things like separating line segments by their 
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line.  Then, you can average the position of each of 
    the lines and extrapolate to the top and bottom of the lane.
    
    This function draws `lines` with `color` and `thickness`.    
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    left_x = []
    left_y = []
    
    right_x = []
    right_y = []
    
    for line in lines:
        for x1,y1,x2,y2 in line:
            # skip vertical segments to avoid division by zero
            if x2 == x1:
                continue
            # 1. discard horizontal segments (slope == 0); they are noise
            # 2. slope > 0 belongs to the right line, slope < 0 to the left line
            #    (y grows downward in image coordinates)
            slope = ((y2-y1)/(x2-x1))
            if (slope < 0):
                left_x.append(x1)
                left_x.append(x2)
                left_y.append(y1)
                left_y.append(y2)
            elif (slope > 0):
                right_x.append(x1)
                right_x.append(x2)
                right_y.append(y1)
                right_y.append(y2)
                
    if (len(left_x) > 0 and len(left_y) > 0):
        # fit a first-degree polynomial y = a*x + b to the left-line points
        coeff_left = np.polyfit(left_x, left_y, 1)
        func_left = np.poly1d(coeff_left)
        # evaluate the line at x = 0 (left edge of the image) and x = 460
        # (the top-left vertex of the region of interest)
        y1L = int(func_left(0))
        y2L = int(func_left(460))
        cv2.line(img, (0, y1L), (460, y2L), color, thickness)


    if (len(right_x) > 0 and len(right_y) > 0):
        # fit a first-degree polynomial y = a*x + b to the right-line points
        coeff_right = np.polyfit(right_x, right_y, 1)
        func_right = np.poly1d(coeff_right)
        # evaluate the line at x = 500 (the top-right vertex of the region
        # of interest) and at the right edge of the image
        y1R = int(func_right(500))
        y2R = int(func_right(img.shape[1]))
        cv2.line(img, (500, y1R), (img.shape[1], y2R), color, thickness)
                
# Python 3 has support for cool math symbols.

def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of hough_lines(): a blank (all black) image
    with lines drawn on it.
    
    `initial_img` should be the image before any processing.
    
    The result image is computed as follows:
    
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)

Testing with one sample image


In [3]:
#reading in an image
image = mpimg.imread('../test_images/solidWhiteRight.jpg')

img = grayscale(image)
# apply Gaussian blur to further smooth the image before edge detection
img = gaussian_blur(img, kernel_size = 3)
img = canny(img, low_threshold = 80, high_threshold = 240)


# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(460, 320), (500, 320), (imshape[1],imshape[0])]], dtype=np.int32)
img = region_of_interest(img, vertices)

# Hough transform
line_image = hough_lines(img, rho = 1, theta= np.pi/180, threshold = 50, min_line_len = 40, max_line_gap = 20)

# Draw the lines on the edge image
img = weighted_img(line_image, image , α=0.8, β=1., λ=0.)
plt.imsave("../test_images_output/solidWhiteRightOutput.jpg", img)
plt.imshow(img)
x = [vertices[0][0][0], vertices[0][1][0], vertices[0][2][0], vertices[0][3][0]]
y = [vertices[0][0][1], vertices[0][1][1], vertices[0][2][1], vertices[0][3][1]]
plt.plot(x, y, 'b--', lw=4)


Out[3]:
[<matplotlib.lines.Line2D at 0x1ed30e5f2b0>]

Test on Videos

We can test our solution on two provided videos:

solidWhiteRight.mp4

solidYellowLeft.mp4

Let's try the one with the solid white lane on the right first ...


In [4]:
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # you should return the final output (image where lines are drawn on lanes)
    img = grayscale(image)
    # apply Gaussian blur to further smooth the image before edge detection
    img = gaussian_blur(img, kernel_size = 3)
    img = canny(img, low_threshold = 80, high_threshold = 240)

    # This time we are defining a four sided polygon to mask
    imshape = image.shape
    vertices = np.array([[(0,imshape[0]),(460, 320), (500, 320), (imshape[1],imshape[0])]], dtype=np.int32)
    img = region_of_interest(img, vertices)

    # Hough transform
    line_image = hough_lines(img, rho = 2, theta= np.pi/180, threshold = 50, min_line_len = 40, max_line_gap = 20)

    # Draw the lines on the edge image
    final = weighted_img(line_image, image, α = 0.8, β = 1., λ = 0.)
    return final

white_line_output = '../test_videos_output/solidWhiteRight.mp4'
clip1 = VideoFileClip('../test_videos/solidWhiteRight.mp4')
white_line_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_line_clip.write_videofile(white_line_output, audio=False)


[MoviePy] >>>> Building video ../test_videos_output/solidWhiteRight.mp4
[MoviePy] Writing video ../test_videos_output/solidWhiteRight.mp4
100%|█████████▉| 221/222 [00:05<00:00, 32.29it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../test_videos_output/solidWhiteRight.mp4 

Wall time: 6.95 s

Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.


In [5]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_line_output))


Out[5]:

Here is the YouTube link to the video.


Improve the draw_lines() function

Now for the one with the solid yellow lane on the left. This one's more tricky!


In [6]:
yellow_output = '../test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('../test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)


[MoviePy] >>>> Building video ../test_videos_output/solidYellowLeft.mp4
[MoviePy] Writing video ../test_videos_output/solidYellowLeft.mp4
100%|█████████▉| 681/682 [00:21<00:00, 31.14it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../test_videos_output/solidYellowLeft.mp4 

Wall time: 23.8 s

In [7]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(yellow_output))


Out[7]:

Here is the YouTube link to the video.

Optional Challenge


In [8]:
challenge_output = '../test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('../test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)


[MoviePy] >>>> Building video ../test_videos_output/challenge.mp4
[MoviePy] Writing video ../test_videos_output/challenge.mp4
100%|██████████| 251/251 [00:16<00:00, 12.28it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../test_videos_output/challenge.mp4 

Wall time: 19.4 s

In [9]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))


Out[9]:

Potential shortcomings in the pipeline

Looking at the output videos, the drawn lane lines sometimes appear skewed. The pipeline also did not perform well on the challenge video, where the car is continuously turning and the lane lines are strongly curved, so a straight-line fit is a poor approximation.


Possible improvements to the pipeline

One possible improvement would be to apply a color filter (for example, keeping only white and yellow pixels) before edge detection, so that the lane lines are identified more reliably; a minimal sketch of this idea is shown below.
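
The snippet below is a minimal sketch of such a filter, assuming RGB input as returned by mpimg.imread. The helper name color_filter and the HLS threshold values are illustrative assumptions and have not been tuned on the project videos.

import cv2
import numpy as np

def color_filter(image):
    """Keep only white-ish and yellow-ish pixels of an RGB image (illustrative thresholds)."""
    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
    # white: high lightness, any hue or saturation
    white_mask = cv2.inRange(hls, np.uint8([0, 200, 0]), np.uint8([255, 255, 255]))
    # yellow: hue roughly 10-40 (OpenCV hue range is 0-179), reasonably saturated
    yellow_mask = cv2.inRange(hls, np.uint8([10, 0, 100]), np.uint8([40, 255, 255]))
    mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(image, image, mask=mask)

The filtered image could then be fed into the existing grayscale, blur, and Canny steps of process_image.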